Dialogue is always related to certain topics. However, due to the input length limit of pre-trained language models (PLMs), it is challenging for current dialogue generation models to fuse the dialogue history and topic information at the same time. To expand the information that a PLM can use, we encode the topic and dialogue history information with certain prompts using Fusion-in-Decoder (FiD) over multiple channels, and we explore the influence of three different channel settings. Our experiments focus on a Chinese dataset named NaturalConv, in which the dialogue revolves around a piece of recent news. We thoroughly compare different dialogue models and different FiD channel settings. Empirical results show that, by combining our proposed whole-passage channel with an additional history channel, our method achieves competitive performance on NaturalConv, making it possible to encode various kinds of information from excessively long texts.
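The channel idea lends itself to a compact illustration. Below is a minimal PyTorch sketch of FiD-style fusion, assuming a toy encoder-decoder; the channel contents, model sizes, and the class name `FiDChannelFusion` are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FiDChannelFusion(nn.Module):
    """Encode each channel separately, then let one decoder attend over
    the concatenated encodings: the core Fusion-in-Decoder idea."""
    def __init__(self, d_model=256, vocab=32000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), 2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), 2)
        self.lm_head = nn.Linear(d_model, vocab)

    def forward(self, channels, dec_ids):
        # Each channel (e.g., topic passage, dialogue history) is encoded
        # independently, sidestepping the PLM input-length limit.
        memory = torch.cat(
            [self.encoder(self.embed(ids)) for ids in channels], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(dec_ids.size(1))
        out = self.decoder(self.embed(dec_ids), memory, tgt_mask=mask)
        return self.lm_head(out)

# Toy usage: a "whole passage" channel plus a "history" channel.
passage = torch.randint(0, 32000, (1, 64))
history = torch.randint(0, 32000, (1, 48))
model = FiDChannelFusion()
print(model([passage, history], torch.randint(0, 32000, (1, 16))).shape)
```

Because each channel is encoded on its own, the total context can exceed the encoder's per-input limit; only the decoder sees the concatenation.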
This paper introduces Sketched Reality, an approach that combines AR sketching and actuated tangible user interfaces (TUIs) for bidirectional sketching interaction. Bidirectional sketching enables virtual sketches and physical objects to influence each other through physical actuation and digital computation. In existing AR sketching, the relationship between the virtual and physical worlds goes in only one direction: while physical interaction can affect virtual sketches, virtual sketches have no returning effect on physical objects or the environment. In contrast, bidirectional sketching interaction allows a seamless coupling between sketches and actuated TUIs. In this paper, we demonstrate the concept with tabletop-sized small robots (Sony Toio) and an iPad-based AR sketching tool. In our system, virtual sketches drawn and simulated on the iPad (e.g., lines, walls, pendulums, and springs) can move, actuate, collide with, and constrain physical Toio robots, as if virtual sketches and physical objects existed in the same space, through the seamless coupling between AR and robot motion. This paper contributes a set of novel interactions and a design space for bidirectional AR sketching. We demonstrate a range of potential applications, such as tangible physics education, explorable mechanisms, tangible games for children, and in-situ robot programming via sketching.
Deep learning recommendation models (DLRMs) have been widely applied at internet companies. The embedding tables of a DLRM are too large to fit entirely in GPU memory. We propose a GPU-based software cache method that dynamically manages the embedding tables across CPU and GPU memory by exploiting the ID frequency statistics of the target dataset. Our proposed software cache trains the entire DLRM on the GPU efficiently with a synchronous update scheme. It also combines with the widely used hybrid parallel training approach to scale to multiple GPUs. Evaluating our prototype system shows that we can keep only 1.5% of the embedding parameters in GPU memory and still obtain a decent end-to-end training speed.
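As a rough illustration of the caching idea, the sketch below keeps the most frequent embedding rows on the GPU and falls back to host memory on a miss; the class, the static pinning policy, and the row-by-row fetch loop are simplifying assumptions, not the actual system.

```python
import torch

class CachedEmbedding:
    """Frequency-aware split of an embedding table across CPU and GPU."""
    def __init__(self, num_rows, dim, gpu_capacity, id_freq, device="cuda"):
        self.cpu_weight = torch.randn(num_rows, dim)  # full table in host memory
        # Pin the most frequent IDs on the device, based on dataset statistics.
        hot = torch.topk(id_freq, gpu_capacity).indices
        self.gpu_slot_of = {int(i): s for s, i in enumerate(hot)}
        self.gpu_weight = self.cpu_weight[hot].to(device)
        self.device = device

    def lookup(self, ids):
        out = torch.empty(len(ids), self.cpu_weight.size(1), device=self.device)
        for k, i in enumerate(ids.tolist()):
            slot = self.gpu_slot_of.get(i)
            if slot is not None:                 # cache hit: read from GPU copy
                out[k] = self.gpu_weight[slot]
            else:                                # miss: fetch the row from CPU
                out[k] = self.cpu_weight[i].to(self.device)
        return out

# Toy usage (device="cpu" just to keep the sketch runnable anywhere):
freq = torch.randint(1, 100, (1000,)).float()
table = CachedEmbedding(1000, 16, gpu_capacity=100, id_freq=freq, device="cpu")
print(table.lookup(torch.tensor([3, 42, 999])).shape)
```

A production cache would batch the misses and update slots dynamically; this static top-k policy only shows why a small hot set can cover most lookups.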
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods. To exploit this property, the early stopping trick, which halts the optimization at an early stage of training, is usually adopted. Current methods generally decide the early stopping point by considering the DNN as a whole. However, a DNN can be viewed as a composition of a series of layers, and it has been found that the latter layers of a DNN are much more sensitive to label noise, while their earlier counterparts are quite robust. Therefore, choosing a single stopping point for the whole network can make different DNN layers antagonize one another and degrade the final performance. In this paper, we propose to separate a DNN into different parts and progressively train them to address this problem. Instead of early stopping, which trains the whole DNN all at once, we first train the earlier DNN layers by optimizing them for a relatively large number of epochs. During training, we progressively train the latter DNN layers with a smaller number of epochs, keeping the preceding layers fixed, to counteract the impact of noisy labels. We term the proposed method progressive early stopping (PES). Despite its simplicity, PES helps obtain more promising and stable results than early stopping. Furthermore, by combining PES with existing approaches to training with noisy labels, we achieve state-of-the-art performance on image classification benchmarks.
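The progressive schedule can be sketched as follows, assuming the network is pre-split into sequential parts; the epoch counts, the freezing policy, and the function name `pes_train` are illustrative guesses rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

def pes_train(parts, loader, epochs_per_part=(25, 7, 5), lr=0.01):
    """parts: sequential stages of the DNN, earlier stages first, with the
    classifier head inside the last stage; len(parts) == len(epochs_per_part)."""
    criterion = nn.CrossEntropyLoss()
    model = nn.Sequential(*parts)
    for k, epochs in enumerate(epochs_per_part):
        # Freeze the already-trained earlier stages; the noise-sensitive
        # later stages get progressively fewer epochs.
        for j, p in enumerate(parts):
            p.requires_grad_(j >= k)
        opt = torch.optim.SGD(
            [q for p in parts[k:] for q in p.parameters()], lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                loss = criterion(model(x), y)
                opt.zero_grad(); loss.backward(); opt.step()
    return model
```

The point of the sketch is the shrinking epoch budget per stage: early, noise-robust layers are fit thoroughly, while later layers see too few updates to memorize corrupted labels.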
The network architecture of end-to-end (E2E) automatic speech recognition (ASR) can be classified into several models, including connectionist temporal classification (CTC), recurrent neural network transducer (RNN-T), attention mechanism, and non-autoregressive mask-predict models. Since each of these network architectures has pros and cons, a typical use case is to switch between these separate models depending on the application requirement, resulting in the increased overhead of maintaining all models. Several methods for integrating two of these complementary models to mitigate the overhead issue have been proposed; however, if we integrate more models, we will further benefit from these complementary models and realize broader applications with a single system. This paper proposes four-decoder joint modeling (4D) of CTC, attention, RNN-T, and mask-predict, which has the following three advantages: 1) The four decoders are jointly trained so that they can be easily switched depending on the application scenarios. 2) Joint training may bring model regularization and improve the model robustness thanks to their complementary properties. 3) Novel one-pass joint decoding methods using CTC, attention, and RNN-T further improve the performance. The experimental results showed that the proposed model consistently reduced the WER.
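A hedged sketch of what joint training of the four decoders might look like as a single objective is given below; the weights and the scalar stand-ins for the per-decoder losses are placeholders, not the paper's configuration.

```python
import torch

def joint_4d_loss(loss_ctc, loss_att, loss_rnnt, loss_mask,
                  weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted sum of the four decoder losses, each computed on the same
    shared encoder output; joint training regularizes that shared encoder."""
    w1, w2, w3, w4 = weights
    return w1 * loss_ctc + w2 * loss_att + w3 * loss_rnnt + w4 * loss_mask

# Toy usage with scalar stand-ins for the per-decoder losses:
total = joint_4d_loss(torch.tensor(1.2), torch.tensor(0.9),
                      torch.tensor(1.5), torch.tensor(1.1))
print(total)
```

At inference, any single decoder (or a one-pass combination of CTC, attention, and RNN-T scores) can be selected without retraining, which is what removes the maintenance overhead of four separate models.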
This paper describes the ESPnet Unsupervised ASR Open-source Toolkit (EURO), an end-to-end open-source toolkit for unsupervised automatic speech recognition (UASR). EURO adopts the state-of-the-art UASR learning method introduced by the Wav2vec-U, originally implemented at FAIRSEQ, which leverages self-supervised speech representations and adversarial training. In addition to wav2vec2, EURO extends the functionality and promotes reproducibility for UASR tasks by integrating S3PRL and k2, resulting in flexible frontends from 27 self-supervised models and various graph-based decoding strategies. EURO is implemented in ESPnet and follows its unified pipeline to provide UASR recipes with a complete setup. This improves the pipeline's efficiency and allows EURO to be easily applied to existing datasets in ESPnet. Extensive experiments on three mainstream self-supervised models demonstrate the toolkit's effectiveness and achieve state-of-the-art UASR performance on TIMIT and LibriSpeech datasets. EURO will be publicly available at https://github.com/espnet/espnet, aiming to promote this exciting and emerging research area based on UASR through open-source activity.
Spoken language understanding (SLU) is a task aiming to extract high-level semantics from spoken utterances. Previous works have investigated the use of speech self-supervised models and textual pre-trained models, which have shown reasonable improvements on various SLU tasks. However, because of the mismatched modalities between speech signals and text tokens, previous methods usually need complex designs of the frameworks. This work proposes a simple yet efficient unsupervised paradigm that connects speech and textual pre-trained models, resulting in an unsupervised speech-to-semantic pre-trained model for various tasks in SLU. Specifically, we propose to use unsupervised automatic speech recognition (ASR) as a connector that bridges the different modalities used in speech and textual pre-trained models. Our experiments show that unsupervised ASR itself can improve the representations from speech self-supervised models. More importantly, it is shown to be an efficient connector between speech and textual pre-trained models, improving performance on five different SLU tasks. Notably, on spoken question answering, we reach the state-of-the-art result on the challenging NMSQA benchmark.
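The connector idea reduces to a short pipeline, sketched below with placeholder components; `ssl_model`, `uasr_decoder`, and `text_plm` are hypothetical callables standing in for the actual pre-trained modules.

```python
def speech_to_semantics(waveform, ssl_model, uasr_decoder, text_plm):
    feats = ssl_model(waveform)        # self-supervised speech representations
    pseudo_text = uasr_decoder(feats)  # unsupervised ASR needs no paired labels
    return text_plm(pseudo_text)       # textual PLM supplies the semantics

# Toy usage with trivial stand-ins for the three components:
emb = speech_to_semantics(
    [0.0, 0.1, -0.1],
    ssl_model=lambda w: [sum(w)],
    uasr_decoder=lambda f: "pseudo transcript",
    text_plm=lambda t: {"semantics": t.upper()},
)
print(emb)
```

The appeal is that no step requires paired speech-text supervision: the UASR stage converts the speech modality into the token modality the text model already understands.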
Beam search is the dominant ASR decoding algorithm for end-to-end models, and it generates tree-structured hypotheses. However, recent studies have shown that decoding with hypothesis merging can achieve a more efficient search with comparable or better performance. Yet the full context kept by recurrent networks is incompatible with hypothesis merging. We propose to use vector-quantized long short-term memory units (VQ-LSTM) in the prediction network of an RNN transducer. By training the discrete representation jointly with the ASR network, hypotheses can be actively merged to generate lattices. Our experiments on the Switchboard corpus show that the proposed VQ RNN transducer improves ASR performance over transducers with a conventional prediction network while also producing denser lattices with a lower oracle word error rate (WER) at the same beam size. Additional language model rescoring experiments also demonstrate the effectiveness of the proposed lattice generation scheme.
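A minimal sketch of vector-quantizing the prediction-network state is shown below, assuming a standard straight-through VQ on top of an LSTM cell; the codebook size and the cell structure are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VQLSTMCell(nn.Module):
    """LSTM cell whose hidden state is snapped to a finite codebook, so two
    hypotheses reaching the same code can be merged into one lattice node."""
    def __init__(self, input_size, hidden_size, codebook_size=256):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.codebook = nn.Embedding(codebook_size, hidden_size)

    def forward(self, x, state):
        h, c = self.cell(x, state)
        # Nearest codebook entry for each hidden state in the batch.
        d = torch.cdist(h.unsqueeze(1), self.codebook.weight.unsqueeze(0))
        idx = d.squeeze(1).argmin(dim=-1)
        h_q = self.codebook(idx)
        # Straight-through estimator keeps the cell trainable end to end.
        h_q = h + (h_q - h).detach()
        return h_q, (h_q, c), idx

# Toy usage: the returned discrete IDs are what makes merging possible.
cell = VQLSTMCell(8, 16)
out, (h, c), codes = cell(torch.randn(2, 8), (torch.zeros(2, 16), torch.zeros(2, 16)))
print(codes)
```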
Compared with conventional statistical parametric approaches, deep-learning-based singing voice synthesis (SVS) systems have been shown to flexibly generate singing of better quality. However, neural systems are typically data-hungry, and it is difficult to reach a reasonable singing quality with the limited publicly available training data. In this work, we explore different data augmentation methods to boost the training of SVS systems, including several strategies customized for SVS based on pitch augmentation and mix-up augmentation. To further stabilize training, we introduce a cycle-consistent training strategy. Extensive experiments on two public singing databases demonstrate that our proposed augmentation methods and stabilizing training strategy significantly improve performance in both objective and subjective evaluations.
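The two augmentation families can be sketched briefly, assuming librosa and numpy; the parameter ranges below are illustrative, not the paper's tuned settings.

```python
import numpy as np
import librosa

def pitch_augment(wav, sr, max_semitones=2):
    """Randomly transpose a singing clip within +/- max_semitones."""
    shift = np.random.uniform(-max_semitones, max_semitones)
    return librosa.effects.pitch_shift(wav, sr=sr, n_steps=shift)

def mixup_augment(feat_a, feat_b, target_a, target_b, alpha=0.2):
    """Blend two training examples and their targets with a Beta-sampled weight."""
    lam = np.random.beta(alpha, alpha)
    return lam * feat_a + (1 - lam) * feat_b, lam * target_a + (1 - lam) * target_b
```

Pitch transposition is a natural fit for singing, where the score pitch can be shifted consistently with the audio, while mix-up mainly acts as a regularizer when the corpus is small.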
Speech processing systems currently do not support the vast majority of languages, partly because of the lack of data in low-resource languages. Cross-lingual transfer offers a compelling way to help bridge this digital divide by incorporating high-resource data into low-resource systems. Current cross-lingual algorithms have shown success in some text-based tasks and in speech-related tasks for some low-resource languages. However, scaling speech systems up to support hundreds of low-resource languages remains unsolved. To help bridge this gap, we propose a language-similarity approach that can efficiently identify acoustic cross-lingual transfer pairs across hundreds of languages. We demonstrate the effectiveness of our approach on language family classification, speech recognition, and speech synthesis tasks.
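The pair-selection step can be illustrated with a small sketch, assuming each language is summarized by a single acoustic embedding vector; the embeddings and the cosine ranking below are placeholder assumptions, not the paper's actual similarity measure.

```python
import numpy as np

def rank_transfer_pairs(target_emb, donor_embs):
    """Return donor language codes sorted by cosine similarity to the target."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sorted(donor_embs,
                  key=lambda name: cos(target_emb, donor_embs[name]),
                  reverse=True)

# Toy usage with random vectors standing in for acoustic language embeddings:
donors = {"es": np.random.randn(16), "de": np.random.randn(16),
          "hi": np.random.randn(16)}
print(rank_transfer_pairs(np.random.randn(16), donors))
```

Ranking donors by a cheap vector similarity avoids exhaustively training a transfer system for every candidate pair, which is what makes the approach scale to hundreds of languages.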